Implement a queue with 2 stacks. Your queue should have an enqueue and a dequeue method and it should be "first in first out" (FIFO).
Optimize for the time cost of calls on your queue. These can be any mix of enqueue and dequeue calls.
Assume you already have a stack implementation and it gives O(1) time push and pop.
Let's call our stacks in_stack and out_stack.
For enqueue, we simply push the enqueued item onto in_stack.
For dequeue on an empty out_stack, the oldest item is at the bottom of in_stack. So we dig to the bottom of in_stack by pushing each item one-by-one onto out_stack until we reach the bottom item, which we return.
After moving everything from in_stack to out_stack, the item that was enqueued the 2nd longest ago (after the item we just returned) is at the top of out_stack, the item enqueued 3rd longest ago is just below it, etc. So to dequeue on a non-empty out_stack, we simply return the top item from out_stack.
With that description in mind, let's write some code!
class QueueTwoStacks(object):

    def __init__(self):
        self.in_stack = []
        self.out_stack = []

    def enqueue(self, item):
        self.in_stack.append(item)

    def dequeue(self):
        if len(self.out_stack) == 0:

            # Move items from in_stack to out_stack, reversing order
            while len(self.in_stack) > 0:
                newest_in_stack_item = self.in_stack.pop()
                self.out_stack.append(newest_in_stack_item)

            # If out_stack is still empty, raise an error
            if len(self.out_stack) == 0:
                raise IndexError("Can't dequeue from empty queue!")

        return self.out_stack.pop()
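As a quick sanity check (an assumed usage example, not part of the original write-up), here's how the queue behaves on a short mix of calls; items come back out in the order they went in:

queue = QueueTwoStacks()

queue.enqueue('a')
queue.enqueue('b')
queue.enqueue('c')

print(queue.dequeue())  # 'a' -- triggers the move from in_stack to out_stack
print(queue.dequeue())  # 'b' -- comes straight off out_stack in O(1) time

queue.enqueue('d')
print(queue.dequeue())  # 'c' -- still waiting on out_stack, so it beats 'd'
print(queue.dequeue())  # 'd'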
Each enqueue is clearly O(1) time, and so is each dequeue when out_stack has items. Dequeue on an empty out_stack is order of the number of items in in_stack at that moment, which can vary significantly.
Notice that the more expensive a dequeue on an empty out_stack is (that is, the more items we have to move from in_stack to out_stack), the more O(1)-time dequeues off of a non-empty out_stack it wins us in the future. Once items are moved from in_stack to out_stack they just sit there, ready to be dequeued in O(1) time. An item never moves "backwards" in our data structure.
We might guess that this "averages out" so that in a set of m enqueues and dequeues the total cost of all dequeues is actually just O(m). To check this rigorously, we can use the accounting method, counting the time cost per item instead of per enqueue or dequeue.
So let's look at the worst case for a single item, which is the case where it is enqueued and then later dequeued. In this case, the item enters in_stack (costing 1 push), then later moves to out_stack (costing 1 pop and 1 push), then later comes off out_stack to get returned (costing 1 pop).
Each of these 4 pushes and pops is O(1) time. So our total cost per item is O(1). Our m enqueue and dequeue operations put m or fewer items into the system, giving a total runtime of O(m).
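To see the accounting argument play out, here's a rough sketch (an illustration under assumed usage, not part of the original solution) that subclasses QueueTwoStacks to tally the raw stack pushes and pops across a random mix of m calls. Since each item passes through at most 4 stack operations, the tally never exceeds 4 * m:

import random

class CountingQueueTwoStacks(QueueTwoStacks):
    """Same queue, but tallies every push/pop on the underlying stacks."""

    def __init__(self):
        super().__init__()
        self.stack_ops = 0

    def enqueue(self, item):
        self.stack_ops += 1  # one push onto in_stack
        super().enqueue(item)

    def dequeue(self):
        # An empty out_stack means every item in in_stack costs
        # 1 pop + 1 push to move over; the returned item costs 1 more pop
        moved = len(self.in_stack) if len(self.out_stack) == 0 else 0
        self.stack_ops += 2 * moved + 1
        return super().dequeue()

queue = CountingQueueTwoStacks()
m = 1000
size = 0
for i in range(m):
    if size == 0 or random.random() < 0.5:
        queue.enqueue(i)
        size += 1
    else:
        queue.dequeue()
        size -= 1

# Each item passes through at most 4 stack operations (2 pushes, 2 pops),
# so the total stays O(m) no matter how the calls interleave.
assert queue.stack_ops <= 4 * m
print(queue.stack_ops, "stack operations for", m, "queue calls")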
People often struggle with the runtime analysis for this one. The trick is to think of the cost per item passing through our queue, rather than the cost per enqueue() and dequeue().
This trick generally comes in handy when you're looking at the time cost of not just one call, but "m" calls.